# Quantized text generation

Gemma 3 1b It Fast GGUF
A quantized GGUF build optimized for low-end hardware and CPU-only environments, enabling production-ready inference under tight resource constraints (a minimal CPU inference sketch follows this card).
Large Language Model
h4shy · 101 · 1
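The card above targets CPU-only inference of a GGUF quantization. Below is a minimal sketch of how such a file could be run locally, assuming the llama-cpp-python package is installed and the model file has already been downloaded; the file name and parameter values are hypothetical placeholders, not part of this listing.

```python
# Minimal CPU-only inference sketch using llama-cpp-python (an assumption,
# not specified by the listing). The .gguf file name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-1b-it-fast.Q5_K_M.gguf",  # hypothetical local file
    n_ctx=2048,      # context window; keep small on low-memory machines
    n_threads=4,     # CPU threads used for generation
    n_gpu_layers=0,  # 0 = no layers offloaded, CPU-only inference
)

out = llm(
    "Explain GGUF quantization in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```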
Qwen3 8B GGUF
MIT
A quantized text generation model that keeps output and embedding tensors in f16 format while the remaining tensors use q5_k or q6_k quantization, yielding a smaller file with performance comparable to a pure f16 model.
Large Language Model · English
ZeroWw · 236 · 1
Qwen3 4B GGUF
MIT
A quantized text generation model with output and embedding tensors in f16 format, while the other tensors use q5_k or q6_k quantization, resulting in a smaller file with performance comparable to the pure f16 version (a tensor-type inspection sketch follows this card).
Large Language Model · English
ZeroWw · 495 · 2
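Both Qwen3 cards describe the same mixed-precision layout: f16 for the output and embedding tensors, q5_k or q6_k for everything else. A minimal sketch of how to verify that layout in a downloaded file is shown below, assuming the gguf Python package is installed; the file name is hypothetical, and the tensor names checked are the ones llama.cpp-style GGUF files typically use.

```python
# Sketch: list per-tensor quantization types in a GGUF file using the gguf
# package (an assumption, not specified by the listing). File name is hypothetical.
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("Qwen3-4B.q5_k.gguf")  # hypothetical local file

counts = Counter()
for tensor in reader.tensors:
    type_name = tensor.tensor_type.name  # e.g. F16, Q5_K, Q6_K
    counts[type_name] += 1
    # Output and token-embedding tensors (typical llama.cpp names) are
    # expected to remain in F16 under this quantization recipe.
    if tensor.name in ("output.weight", "token_embd.weight"):
        print(f"{tensor.name}: {type_name}")

print(dict(counts))  # rough distribution of quantization types in the file
```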